
    Circus in Motion: A Multimodal Exergame Supporting Vestibular Therapy for Children with Autism

    Exergames are serious games that involve physical exertion, using novel input models to turn play into a form of exercise. Exergames are promising for improving the vestibular differences of children with autism but often lack adaptation mechanisms that adjust the difficulty level of the game. In this paper, we present the design and development of Circus in Motion, a multimodal exergame that supports children with autism in practicing non-locomotor movements. We describe how data from a 3D depth camera enables the tracking of non-locomotor movements, allowing children to interact naturally with the exergame. A controlled experiment with 12 children with autism shows that Circus in Motion outperforms traditional vestibular therapies in increasing physical activation and the number of movement repetitions. We show how data from real-time usage of Circus in Motion could feed a fuzzy logic model that adjusts the difficulty level of the exergame according to each child's motor performance. We close by discussing open challenges and opportunities for multimodal exergames to support long-term motor therapeutic interventions for children with autism.
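
    To make the adaptation idea concrete, the sketch below shows one way a fuzzy logic model could map motor performance to a difficulty change. It is a minimal illustration only: the inputs (repetitions per minute, error rate), membership functions, and rules are our assumptions, not the model used in Circus in Motion.

        # Minimal fuzzy-logic sketch for adjusting exergame difficulty.
        # Inputs (repetitions per minute, error rate), membership
        # functions, and rules are illustrative assumptions.

        def tri(x, a, b, c):
            """Triangular membership function rising to 1 at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def difficulty_delta(reps_per_min, error_rate):
            # Fuzzify the child's motor performance.
            low    = tri(reps_per_min, -1, 0, 10)
            medium = tri(reps_per_min, 5, 12, 20)
            high   = tri(reps_per_min, 15, 25, 40)
            sloppy = tri(error_rate, 0.3, 1.0, 1.7)

            # Rules: good performance raises difficulty, errors lower it.
            easier = max(low, sloppy)         # IF reps low OR errors high
            same   = medium                   # IF reps medium
            harder = min(high, 1 - sloppy)    # IF reps high AND errors low

            # Defuzzify as a weighted average of difficulty changes.
            total = easier + same + harder
            if total == 0:
                return 0.0
            return (-1 * easier + 0 * same + 1 * harder) / total

        print(difficulty_delta(22, 0.1))   # -> 1.0: make the game harder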

    Feel and Touch: A Haptic Mobile Game to Assess Tactile Processing

    Haptic interfaces have great potential for assessing the tactile processing of children with Autism Spectrum Disorder (ASD), an area that has been under-explored due to the lack of tools to assess it. Until now, haptic interfaces for children have mostly been used as teaching or therapeutic tools, so there are still open questions about how they could be used to assess the tactile processing of children with ASD. This article presents the design process that led to the development of Feel and Touch, a mobile game augmented with vibrotactile stimuli to assess tactile processing. Our feasibility evaluation, with five children aged 3 to 6 years, shows that children accept the vibrations and are able to use the proposed vibrotactile patterns. However, the instructions still need work to make the game dynamics clearer, as do the rewards meant to hold children's attention. We close this article by discussing future work and conclusions.

    Furthering Development of Smart Fabrics to Improve the Accessibility of Music Therapy

    In this paper, we present the design and development of HarmonicThreads, a smart, cost-effective fabric augmented with generative machine learning algorithms that create music in real time according to the user's interaction. We hypothesize that individuals with sensory differences could take advantage of the fabric's flexibility, that the music will adapt to each user's interaction, and that the affordable hardware we propose will make the system more accessible. We followed a design thinking methodology using data from a multidisciplinary team in Mexico and the United States. We close this paper by discussing challenges in developing accessible smart fabrics in different contexts.
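
    As a rough illustration of the kind of real-time generative mapping described above, a first-order Markov chain over pitches driven by touch pressure might look like the following. The transition table, velocity scaling, and pressure readings are hypothetical; this is not the HarmonicThreads algorithm.

        import random

        # Hypothetical first-order Markov chain over MIDI pitches; touch
        # pressure (0.0-1.0) from the fabric scales note velocity. The
        # transition table and pressure source are illustrative only.
        TRANSITIONS = {
            60: [62, 64, 67],    # from C4, move to D4, E4, or G4
            62: [60, 64],
            64: [62, 67],
            67: [60, 64, 72],
            72: [67],
        }

        def next_note(pitch, pressure):
            new_pitch = random.choice(TRANSITIONS[pitch])
            velocity = int(40 + 87 * max(0.0, min(1.0, pressure)))
            return new_pitch, velocity   # hand off to any MIDI/synth backend

        pitch = 60
        for pressure in [0.2, 0.8, 0.5]:   # stand-ins for sensor readings
            pitch, velocity = next_note(pitch, pressure)
            print(pitch, velocity)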

    BendableSound: An Elastic Multisensory Surface Using Touch-based interactions to Assist Children with Severe Autism During Music Therapy

    Neurological Music Therapy uses live music to improve the sensorimotor regulation of children with severe autism. However, these children often lack musical training, and their impairments limit their interactions with musical instruments. In this paper, we present our co-design work that led to the BendableSound prototype: an elastic multisensory surface encouraging users to practice coordination movements by touching a fabric to play sounds. We present the results of a formative study conducted with 18 teachers showing BendableSound was perceived as "usable" and "attractive". Then, we present a deployment study with 24 children with severe autism showing BendableSound is "easy to use" and may have therapeutic benefits for attention and motor development. We propose a set of design insights that could guide the design of natural user interfaces, particularly elastic multisensory surfaces. We close with a discussion and directions for future work.

    Analyzing Speech Recognition for Individuals with Down Syndrome

    With the rise of voice assistants, speech recognition technologies have been widely used to support natural language processing. However, how well these technologies perform depends on who the users are. They have been predominantly trained on "typical speech" patterns, leaving aside people with disabilities who have unique speech patterns. More specifically, people with Down Syndrome have trouble using speech recognition technology due to their differences in speech. To develop a more accessible voice assistant, this project aims to characterize speech recognition performance for individuals with Down Syndrome. To accomplish this aim, we analyze the quality of transcripts generated by two popular speech recognition algorithms (IBM and Google) to compare how they handle speech from neurotypical individuals and people with Down Syndrome. We analyzed seven videos of interviews between a neurotypical interviewer and participants with Down Syndrome. We computed the symmetric differences between auto-generated subtitles (IBM and YouTube) and human-provided subtitles (ground truth), as well as the word error rate across all sentences. We found that current speech recognition algorithms do not recognize speech from individuals with Down Syndrome as well as speech from neurotypicals. We are currently analyzing the specific types of errors. By characterizing the speech patterns of people with disabilities, speech recognition technologies can become more inclusive and truly help those who need voice assistants the most.
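
    For reference, word error rate is the word-level edit distance between an auto-generated transcript and the human-provided ground truth, divided by the length of the ground truth. A minimal Python sketch of the metric (our illustration, not the project's actual script):

        def word_error_rate(reference, hypothesis):
            """Word-level Levenshtein distance over reference length."""
            ref, hyp = reference.split(), hypothesis.split()
            # dp[i][j] = edits turning first i ref words into first j hyp words
            dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                dp[i][0] = i
            for j in range(len(hyp) + 1):
                dp[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                    dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                                   dp[i][j - 1] + 1,         # insertion
                                   dp[i - 1][j - 1] + sub)   # substitution
            return dp[-1][-1] / len(ref)

        # Two errors over a four-word reference -> 0.5
        print(word_error_rate("she walks to school", "she walk to the school"))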

    Analysis of Speech-to-Text Algorithms in Recognizing Down Syndrome Conversations

    Introduction: Speech-to-text technology has become key in supporting technologies such as voice assistants (e.g., Alexa, Siri). Unfortunately, some individuals with speech differences, such as accents, female voices, children, or individuals with disabilities such as Down Syndrome, are not well recognized, creating issues of inclusivity. The first step toward making these technologies more inclusive is to identify where the errors or weaknesses of speech-to-text algorithms (YouTube, IBM, Zoom, and Azure) lie in recognizing dialogue from diverse populations. Methods: We analyzed 10 videos from the 'Special Books by Special Kids' YouTube channel. The videos include 15 people with Down Syndrome and 6 neurotypicals. To compare how the algorithms perform, we developed a Python script to compute the word error rate, mismatches, insertions, and deletions. Results: Each algorithm did better for neurotypicals than for individuals with Down Syndrome, by almost 40%. Overall, the most accurate algorithm was Azure for both Down Syndrome (46%) and neurotypical (87%) speech. In general, all algorithms struggled most with mismatching words, then with deleting words; the least common mistake was inserting words. Conclusion: Even though Azure does better than the other algorithms, it still does not work well for Down Syndrome speech. To further understand the limitations and potential improvement of these algorithms, we propose a phonetic analysis to identify key sounds that prove difficult to detect in each algorithm. The end goal is to determine the best algorithm for analyzing speech from individuals with Down Syndrome and ultimately to provide an inclusive and more accurate algorithm. We also plan to use state-of-the-art AI services such as OpenAI and AssemblyAI. Acknowledgments: The first three authors contributed equally to this paper. We also thank Dr. Vivian Genaro Motti for her contributions to this research.
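
    The mismatch, insertion, and deletion counts can be recovered by backtracing the same word-level alignment used for word error rate. The sketch below is our illustration of that decomposition, not the authors' Python script:

        def error_breakdown(reference, hypothesis):
            """Count mismatches (substitutions), deletions, insertions."""
            ref, hyp = reference.split(), hypothesis.split()
            dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
            for i in range(len(ref) + 1):
                dp[i][0] = i
            for j in range(len(hyp) + 1):
                dp[0][j] = j
            for i in range(1, len(ref) + 1):
                for j in range(1, len(hyp) + 1):
                    sub = 0 if ref[i - 1] == hyp[j - 1] else 1
                    dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                                   dp[i - 1][j - 1] + sub)
            subs = dels = ins = 0
            i, j = len(ref), len(hyp)
            while i > 0 or j > 0:
                if i > 0 and j > 0 and ref[i - 1] == hyp[j - 1] \
                        and dp[i][j] == dp[i - 1][j - 1]:
                    i, j = i - 1, j - 1              # words match
                elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
                    subs += 1; i, j = i - 1, j - 1   # mismatched word
                elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
                    dels += 1; i -= 1                # word dropped
                else:
                    ins += 1; j -= 1                 # word inserted
            return subs, dels, ins

        # One mismatch, one deletion, no insertions -> (1, 1, 0)
        print(error_breakdown("we went to the store", "we want to store"))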

    Interactions of children and young adults using large-scale elastic displays

    Elastic displays provide a unique and intuitive interaction and could be deployed at large scale. As an emerging technology, open questions remain about the benefits large-scale elastic displays offer over rigid displays and their potential applications in everyday life. In this paper, we present an overview of a 4-year project. First, we describe the development of a large-scale elastic display called BendableSound. Second, we explain the results of a laboratory study showing the elastic display provides a better user and sensory experience than a rigid one. Third, we describe the results of two deployment studies showing how BendableSound could support the therapeutic practices of children with autism and the early development of toddlers. We close by discussing open challenges in studying the untapped potential of elastic displays in pervasive computing.

    When Worlds Collide: Boundary Management of Adolescent and Young Adult Childhood Cancer Survivors and Caregivers

    Adolescent and young adult childhood cancer survivors experience late or long-term biomedical complications as well as economic and psychosocial challenges that can have a lifelong impact on their quality of life. As childhood cancer survivors transition into adulthood, they must learn to balance their identity development with the demands of everyday life and the near- and long-term consequences of their cancer experience, all of which have implications for the ways they use existing technologies and for the design of novel technologies. In this study, we interviewed 24 childhood cancer survivors and six caregivers about their cancer survivorship experiences. The results of our analysis indicate that the challenges of transitioning to adulthood as a cancer survivor necessitate the development and management of multiple societal, relational, and personal boundaries, processes that social computing technologies can help or hinder. This paper contributes to the empirical understanding of adolescent and young adult cancer survivors' social experiences. We further contribute sociotechnical design provocations for researchers, designers, and community members to support survivors.

    Interactive sonification to assist children with autism during motor therapeutic interventions

    Interactive sonification is an effective tool for guiding individuals as they practice movements. Little research has examined the use of interactive sonification to support motor therapeutic interventions for children with autism who exhibit motor impairments. The goal of this research is to study whether children with autism understand interactive sonification during motor therapeutic interventions, the potential impact of interactive sonification on the development of their motor skills, and the feasibility of using it in specialized schools for children with autism. We conducted two deployment studies in Mexico using Go-with-the-Flow, a framework for sonifying movements previously developed for chronic pain rehabilitation. In the first study, six children with autism were asked to perform forward-reach and lateral upper-limb exercises while listening to three different sound structures (i.e., one discrete and two continuous sounds). Results showed that children with autism exhibit awareness of the sonification of their movements and engage with it. Based on the results of the first study, we then adapted the sonifications for the motor therapy of children with autism. In the second study, nine children with autism were asked to perform upper-limb lateral, cross-lateral, and push movements while listening to five different sound structures (i.e., three discrete and two continuous) designed to sonify the movements. Results showed that discrete sound structures engage the children in the performance of upper-limb movements and increase their ability to perform the movements correctly. We finally propose design considerations that could guide the design of future interactive sonification projects.
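
    To make the distinction between sound structures concrete: a continuous structure maps ongoing movement (e.g., reach extent) onto a sound parameter such as pitch, while a discrete structure triggers a fixed sound whenever the movement crosses a milestone. A rough sketch of both mappings, with made-up thresholds and frequency ranges rather than the Go-with-the-Flow implementation:

        # Rough sketch of continuous vs. discrete sonification of a reach,
        # where `extent` is normalized reach progress (0.0 at rest, 1.0 at
        # full reach). Thresholds and frequency ranges are illustrative.

        def continuous_pitch(extent):
            """Continuous structure: map reach extent onto 220-880 Hz."""
            extent = max(0.0, min(1.0, extent))
            return 220.0 + extent * (880.0 - 220.0)

        MILESTONES = [0.25, 0.5, 0.75, 1.0]   # crossings trigger a sound

        def discrete_events(extent, fired):
            """Discrete structure: return newly crossed milestones."""
            events = [m for m in MILESTONES if extent >= m and m not in fired]
            fired.update(events)
            return events   # e.g., play one chime per crossed milestone

        fired = set()
        for extent in [0.1, 0.3, 0.6, 0.9]:   # stand-ins for tracked motion
            print(round(continuous_pitch(extent)), discrete_events(extent, fired))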

    Parents' Perspectives on a Smartwatch Intervention for Children with ADHD: Rapid Deployment and Feasibility Evaluation of a Pilot Intervention to Support Distance Learning During COVID-19

    Distance learning in response to the COVID-19 pandemic presented tremendous challenges for many families. Parents were expected to support children's learning, often while also working from home. Students with Attention Deficit Hyperactivity Disorder (ADHD) are at particularly high risk for setbacks due to difficulties with organization and an increased risk of not participating in scheduled online learning. This paper explores how smartwatch technology, including timing notifications, can support children with ADHD during distance learning necessitated by COVID-19. We implemented a 6-week pilot study of a Digital Health Intervention (DHI) with ten families. The DHI included a smartwatch and a smartphone; Google Calendars were synchronized across the devices to guide children through daily schedules. After the sixth week, we conducted parent interviews to understand the use of the smartwatches and their impact on children's functioning, and we collected physiological data directly from the smartwatch. Our results demonstrate that children successfully adopted the smartwatch, and parents believed the intervention was helpful, especially in supporting the development of their children's organizational skills. Overall, we illustrate how even simple DHIs, such as using smartwatches to promote daily organization and task completion, have the potential to support children and families, particularly during periods of distance learning. We include practical suggestions to help professionals teach children with ADHD to use smartwatches to improve organization and task completion, especially as applied to supporting remote instruction.
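
    As an illustration of how such a schedule could be synchronized across devices, the sketch below writes one schedule block with a popup reminder into a shared Google calendar using the Calendar v3 Python client. The credentials file, calendar ID, and event details are placeholders, and this is not necessarily how the study's calendars were configured:

        from google.oauth2.credentials import Credentials
        from googleapiclient.discovery import build

        # Sketch only: assumes a prior OAuth flow produced token.json and
        # that the child's schedule lives in the shared primary calendar.
        creds = Credentials.from_authorized_user_file(
            "token.json", ["https://www.googleapis.com/auth/calendar"])
        service = build("calendar", "v3", credentials=creds)

        event = {
            "summary": "Math class (join Zoom)",   # placeholder activity
            "start": {"dateTime": "2021-03-01T09:00:00",
                      "timeZone": "America/Los_Angeles"},
            "end": {"dateTime": "2021-03-01T09:45:00",
                    "timeZone": "America/Los_Angeles"},
            "reminders": {
                "useDefault": False,
                # Synced watch surfaces this as a notification 5 min early.
                "overrides": [{"method": "popup", "minutes": 5}],
            },
        }

        created = service.events().insert(calendarId="primary", body=event).execute()
        print("Created:", created.get("htmlLink"))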